Development of The Steepest Descent Method for Unconstrained Optimization of Nonlinear Function

Authors

Abstract

The q-gradient method uses a Yuan step size for odd steps and a geometric recursion for even steps (q-GY). This study aimed to accelerate convergence to the minimum point by minimizing the number of iterations, treating the dilation parameter q as an independent variable, and then comparing the results with three algorithms: the classical steepest descent (SD) method, steepest descent with Yuan steps (SDY), and the q-gradient (q-G) method. Numerical results are presented in tables and graphs. The Rosenbrock function f(x) = (1-x_1)^2 + 100(x_2 - x_1^2)^2 was minimized with μ = 1, σ_0 = 0.5, β = 0.999, and starting points x_0 drawn from a uniform distribution on the interval (-2.048, 2.048) in R^2; 49 points were executed using a Python online compiler on a 64-bit Core i3 laptop. The maximum number of iterations was 58,679. The stopping criterion uses a tolerance limit of 10^-4 and the inequality f(x^*) > f to obtain results. The q-GY method's downward movement towards the minimum was better than that of the SD and SDY methods, while the q-G method showed a good enough performance increase.
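The classical SD baseline in the comparison above can be sketched as follows. This is a minimal illustration only: the backtracking (Armijo) step-size rule, the gradient-norm stopping test, and the single starting point are assumptions for the sketch, not the paper's Yuan, q-G, or q-GY step schemes.

```python
import numpy as np

def rosenbrock(x):
    # f(x) = (1 - x1)^2 + 100 * (x2 - x1^2)^2, minimum at (1, 1)
    return (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2

def rosenbrock_grad(x):
    # Analytic gradient of the Rosenbrock function
    return np.array([
        -2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0]**2),
        200 * (x[1] - x[0]**2),
    ])

def steepest_descent(f, grad, x0, tol=1e-4, max_iter=58679):
    # Classical SD; max_iter defaults to the paper's iteration cap.
    # The backtracking line search below is an illustrative choice,
    # not the Yuan step size used in the paper.
    x = np.asarray(x0, dtype=float)
    for k in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:  # stop when the gradient is small
            return x, k
        t = 1.0
        while f(x - t * g) > f(x) - 1e-4 * t * (g @ g):
            t *= 0.5  # halve the step until the Armijo condition holds
        x = x - t * g
    return x, max_iter

# One starting point from the paper's sampling interval (-2.048, 2.048)
x0 = np.array([-2.048, 2.048])
x_star, iters = steepest_descent(rosenbrock, rosenbrock_grad, x0, max_iter=5000)
```

Even capped at 5,000 iterations, SD descends into the Rosenbrock valley quickly; most of the remaining iterations are spent creeping along the curved valley floor, which is the slow behaviour the q-GY variant is designed to improve on.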


Similar sources

A Free Line Search Steepest Descent Method for Solving Unconstrained Optimization Problems

In this paper, we solve unconstrained optimization problems using a free line search steepest descent method. First, we propose a double-parameter scaled quasi-Newton formula for calculating an approximation of the Hessian matrix. The approximation obtained from this formula is a positive definite matrix that satisfies the standard secant relation. We also show that the largest eigenvalue...


A Modified Algorithm of Steepest Descent Method for Solving Unconstrained Nonlinear Optimization Problems

The steepest descent method (SDM), which can be traced back to Cauchy (1847), is the simplest gradient method for unconstrained optimization problems. The SDM is effective for well-posed and low-dimensional nonlinear optimization problems without constraints; however, for a large-dimensional system it converges very slowly. Therefore, a modified steepest descent method (MSDM) is developed to dea...


Steepest Descent Preconditioning for Nonlinear GMRES Optimization

Steepest descent preconditioning is considered for the recently proposed nonlinear generalized minimal residual (N-GMRES) optimization algorithm for unconstrained nonlinear optimization. Two steepest descent preconditioning variants are proposed. The first employs a line search, while the second employs a predefined small step. A simple global convergence proof is provided for the N-GMRES optimi...


A New Descent Nonlinear Conjugate Gradient Method for Unconstrained Optimization

In this paper, a new nonlinear conjugate gradient method is proposed for large-scale unconstrained optimization. The sufficient descent property holds without any line searches. We use a steplength technique that ensures the Zoutendijk condition holds, and the method is proved to be globally convergent. Finally, we improve it and carry out further analysis.
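As a generic illustration of how a nonlinear conjugate gradient iteration differs from plain steepest descent, here is a Fletcher-Reeves sketch with a backtracking line search. This is a textbook variant chosen for illustration, not the specific sufficient-descent method summarized above.

```python
import numpy as np

def fletcher_reeves(f, grad, x0, tol=1e-6, max_iter=1000):
    # Generic Fletcher-Reeves nonlinear CG with an Armijo backtracking
    # line search; illustrative only, not the paper's method.
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g  # first search direction is steepest descent
    for k in range(max_iter):
        if np.linalg.norm(g) < tol:
            return x, k
        t = 1.0
        while f(x + t * d) > f(x) + 1e-4 * t * (g @ d):
            t *= 0.5  # backtrack until the Armijo condition holds
        x = x + t * d
        g_new = grad(x)
        beta = (g_new @ g_new) / (g @ g)  # Fletcher-Reeves coefficient
        d = -g_new + beta * d
        g = g_new
        if g @ d >= 0:
            d = -g  # restart with steepest descent if d is not a descent direction
    return x, max_iter

# Demo on a simple convex quadratic: f(x) = x1^2 + 10 * x2^2, minimum at the origin
f = lambda x: x[0]**2 + 10 * x[1]**2
grad = lambda x: np.array([2 * x[0], 20 * x[1]])
x_star, iters = fletcher_reeves(f, grad, [3.0, 1.0])
```

Reusing previous search directions via the beta term is what lets CG-type methods outperform steepest descent on ill-conditioned problems; the restart guard keeps every direction a descent direction even with the inexact line search.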


On the Complexity of Steepest Descent, Newton's and Regularized Newton's Methods for Nonconvex Unconstrained Optimization Problems

It is shown that the steepest descent and Newton's methods for unconstrained nonconvex optimization under standard assumptions may both require a number of iterations and function evaluations arbitrarily close to O(ε^-2) to drive the norm of the gradient below ε. This shows that the upper bound of O(ε^-2) evaluations known for steepest descent is tight, and that Newton's method may be as slow a...



Journal

Journal title: Sinkron : jurnal dan penelitian teknik informatika

Year: 2022

ISSN: 2541-2019, 2541-044X

DOI: https://doi.org/10.33395/sinkron.v7i3.11596